993 research outputs found

    Optimising Trade-offs Among Stakeholders in Ad Auctions

    We examine trade-offs among stakeholders in ad auctions. Our metrics are revenue (the auctioneer's utility), the number of clicks (the users' utility), and welfare (the advertisers' utility). We show how to optimize linear combinations of the stakeholder utilities, and that these objectives can be tackled through a GSP auction with a per-click reserve price. We then examine constrained optimization of stakeholder utilities. Using simulations and analysis of real-world sponsored search auction data, we demonstrate the feasible trade-offs and examine the effect of changing the allowed number of ads on the utilities of the stakeholders. We investigate both short-term effects, when the players do not have time to modify their behavior, and long-term equilibrium conditions. Finally, we examine a combinatorially richer constrained optimization problem in which there are several allowed configurations (templates) of ad formats. This model captures richer ad formats, which use the available screen real estate in various ways. We show that two natural generalizations of the GSP auction rules to this domain are poorly behaved: either no symmetric Nash equilibrium exists, or the one that does has poor welfare. We also provide positive results for restricted cases.
    Comment: 18 pages, 10 figures, ACM Conference on Economics and Computation 201
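The GSP-with-reserve mechanism the abstract refers to can be sketched as follows. This is a minimal illustration of the standard generalized second-price rule with a per-click reserve, not the paper's full optimization; the function name and the example bid values are hypothetical.

```python
def gsp_with_reserve(bids, reserve, slots):
    """Generalized second-price auction with a per-click reserve.

    bids: dict mapping advertiser -> per-click bid
    reserve: minimum admissible per-click price
    slots: number of ad slots available
    Returns a list of (advertiser, price_per_click) for each filled slot.
    """
    # Only bids meeting the reserve are eligible; rank highest first.
    eligible = sorted(
        ((b, a) for a, b in bids.items() if b >= reserve), reverse=True
    )
    allocation = []
    for i, (bid, adv) in enumerate(eligible[:slots]):
        # Each winner pays the next-highest eligible bid, floored at the reserve.
        next_bid = eligible[i + 1][0] if i + 1 < len(eligible) else reserve
        allocation.append((adv, max(next_bid, reserve)))
    return allocation

# Example: three advertisers, reserve 1.0, two slots.
# "c" is filtered out by the reserve; "a" pays b's bid, "b" pays the reserve.
print(gsp_with_reserve({"a": 4.0, "b": 2.5, "c": 0.8}, reserve=1.0, slots=2))
# → [('a', 2.5), ('b', 1.0)]
```

Raising the reserve shifts the revenue/clicks/welfare trade-off: fewer ads clear, which is exactly the kind of knob the abstract's constrained optimization turns.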

    Pricing, competition and content for internet service providers

    We examine competition between two Internet Service Providers (ISPs): the first provides basic Internet service, while the second provides Internet service plus content, i.e., enhanced service. The first ISP can partner with a Content Provider to offer the same content as the second ISP; when such a partnership occurs, the Content Provider pays the first ISP a transfer price for delivering the content. Users have heterogeneous preferences, and each in general faces three options: (1) buy basic Internet service from the first ISP; (2) buy enhanced service from the second ISP; or (3) buy enhanced service jointly from the first ISP and the Content Provider. We derive results on the existence and uniqueness of a Nash equilibrium, and provide closed-form expressions for the prices, user masses, and profits of the two ISPs and the Content Provider. When the first ISP can choose the transfer price and congestion is linear in the load, it is never optimal for the first ISP to set a negative transfer price in the hope of attracting more revenue from additional customers desiring enhanced service. Conversely, when congestion is sufficiently super-linear, the optimal strategy for the first ISP is either to set a negative transfer price (subsidizing the Content Provider) or to set a transfer price high enough to shut the Content Provider out of the market.
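The kind of pricing equilibrium the abstract analyzes can be illustrated with a much simpler stylized model than the paper's: two providers with linear differentiated demand, solved by best-response iteration. All parameter names and values below are hypothetical and the demand form is an assumption for illustration; the paper's actual model includes congestion and a transfer price.

```python
def nash_prices(a=10.0, b=2.0, c=1.0, iters=200):
    """Best-response iteration for two price-setting providers with
    linear differentiated demand D_i = a - b*p_i + c*p_j (b > c > 0).

    Profit p_i * D_i is maximized at the best response
    p_i = (a + c*p_j) / (2*b); iterating both responses converges
    to the Nash equilibrium p* = a / (2*b - c).
    """
    p1 = p2 = 0.0
    for _ in range(iters):
        p1 = (a + c * p2) / (2 * b)
        p2 = (a + c * p1) / (2 * b)
    return p1, p2

print(nash_prices())  # both prices converge to 10 / (2*2 - 1) ≈ 3.333
```

The iteration is a contraction (factor c/(2b) < 1), so convergence to the unique equilibrium is guaranteed; the paper establishes existence and uniqueness for its richer model analytically.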

    Development of a Mid-Infrared Sea and Lake Ice Index (MISI) Using the GOES Imager

    An automated ice-mapping algorithm has been developed and evaluated using data from the GOES-13 imager. The approach includes cloud-free image compositing as well as image classification using spectral criteria. The algorithm uses an alternative snow index to the Normalized Difference Snow Index (NDSI). The GOES-13 imager does not have a 1.6 µm band, a requirement for NDSI; however, the newly proposed Mid-Infrared Sea and Lake Ice Index (MISI) incorporates the reflective component of the 3.9 µm or mid-infrared (MIR) band, which the GOES-13 imager does carry. Incorporating MISI into a sea or lake ice mapping algorithm allows for mapping of thin or broken ice with no snow cover (nilas, frazil ice) and thicker ice with snow cover, to a degree of confidence comparable to other ice mapping products. The proposed index has been applied over the Great Lakes region and qualitatively compared to the Interactive Multi-sensor Snow and Ice Mapping System (IMS), the National Ice Center ice concentration maps, and MODIS snow cover products. The application of MISI may open additional possibilities in climate research using historical GOES imagery. Furthermore, MISI may be used in addition to the current NDSI in ice identification to build more robust ice-mapping algorithms for the next-generation GOES satellites.
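For context, NDSI-family indices are normalized differences of two reflectance bands. The sketch below shows the generic form, assuming (this is an assumption, not the paper's definition) that MISI follows the same pattern with the reflective MIR component substituted for the 1.6 µm band NDSI requires; the exact band combination and thresholds are specified in the paper.

```python
def normalized_difference(band_a, band_b, eps=1e-9):
    """Generic normalized-difference index: (A - B) / (A + B).

    NDSI uses visible (green) reflectance for A and 1.6 um SWIR
    reflectance for B; snow/ice is bright in A and dark in B, so the
    index is strongly positive over snow and ice. Works on scalars
    or on NumPy arrays of reflectances. `eps` guards against
    division by zero in dark pixels.
    """
    return (band_a - band_b) / (band_a + band_b + eps)

# Hypothetical pixel: high visible reflectance, low SWIR/MIR reflectance.
print(normalized_difference(0.6, 0.2))  # ≈ 0.5, consistent with ice/snow
```

A classification step would then threshold the index (e.g. NDSI > 0.4 is a common snow criterion) together with the cloud mask from the compositing stage.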

    Dwelling on the Negative: Incentivizing Effort in Peer Prediction

    Agents are asked to rank two objects in a setting where effort is costly and agents differ in quality (the probability that they can identify the correct, ground-truth ranking). We study simple output-agreement mechanisms that pay an agent when she agrees with the report of another, and potentially penalize disagreement through a negative payment. Assuming access to a quality oracle, able to determine whether an agent's quality is above a given threshold, we design a payment scheme that aligns incentives so that agents whose quality is above this threshold participate and invest effort. Precluding negative payments leads the expected cost of this quality-oracle mechanism to increase by a factor of 2 to 5 relative to allowing both positive and negative payments. Dropping the assumption of access to a quality oracle, we further show that negative payments can be used to make agents with quality below the quality threshold choose not to participate, while those above continue to participate and invest effort. Through the appropriate choice of payments, any design threshold can be achieved. This self-selection mechanism has the same expected cost as the cost-minimal quality-oracle mechanism, and thus when using the self-selection mechanism, perfect screening comes for free.
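The self-selection logic can be sketched with a simple expected-utility calculation. This is an illustrative model under stated assumptions, not the paper's payment scheme: with only two possible rankings, two effort-exerting agents agree when both are right or both are wrong, and the numeric parameters below are hypothetical.

```python
def expected_utility(q, q_peer, reward, penalty, effort_cost):
    """Expected utility of exerting effort in a binary output-agreement
    mechanism: the agent is paid `reward` on agreement with a peer,
    charged `penalty` on disagreement, and pays `effort_cost`.

    q, q_peer: probabilities of reporting the correct ranking.
    """
    # Agreement occurs when both agents are correct or both are wrong.
    p_agree = q * q_peer + (1 - q) * (1 - q_peer)
    return p_agree * reward - (1 - p_agree) * penalty - effort_cost

# Hypothetical parameters: with a high-quality peer (q_peer = 0.9),
# a high-quality agent expects positive utility and participates...
print(expected_utility(0.9, 0.9, reward=1.0, penalty=1.0, effort_cost=0.3))  # 0.34
# ...while a low-quality agent expects negative utility and opts out.
print(expected_utility(0.6, 0.9, reward=1.0, penalty=1.0, effort_cost=0.3))  # -0.14
```

Without the penalty term, a participation wedge of this kind can only be created by inflating the reward, which is the intuition behind the 2x-5x cost increase the abstract reports when negative payments are precluded.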

    Bayesian forecasting with state space models

    This thesis explores the use of state-space models in time series analysis and forecasting, with particular reference to the Dynamic Linear Model (DLM) introduced by Harrison and Stevens. Concepts from control theory are employed, especially observability, controllability, and filtering, together with Bayesian inference and classical forecasting methodology. First, properties of state-space models which depart from the usual Gaussian assumptions are examined, and the predictive consequences of such models are developed. These models can lead to new phenomena; for example, it is shown that for a wide class of models with a suitably defined steady evolution, the usual properties of classical steady models (such as exponentially weighted moving averages) do not apply. Secondly, by considering the forecast functions, equivalence theorems are proved for DLMs in the steady state and stationary Box-Jenkins models. These theorems are then extended to include both time-varying and non-stationary models, thus establishing a very general predictor equivalence. However, it is shown that intuitively appealing DLMs with diagonal covariance matrices are restricted, covering only part of the equivalent stability/invertibility region, and examples are given to illustrate these points. Thirdly, some problems of inference involving state-space models are examined, and new approaches outlined. A class of collapsing procedures based upon a distance measure between posterior components is introduced. This allows the use of non-normal errors or Harrison-Stevens Class II models by condensing the normal-mixture posterior distribution to prevent an explosion of information with time, and avoids some of the problems of the Harrison-Stevens solution. Finally, some examples are given to illustrate the way in which some of these models and collapsing procedures might be used in practice.
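The simplest DLM discussed in this literature is the local-level (steady) model, whose Bayesian updating is the scalar Kalman filter. A minimal sketch, with hypothetical variance values, shows the connection to exponentially weighted moving averages mentioned above: in the steady state the gain becomes constant, so the filtered mean is an EWMA of the observations.

```python
def local_level_filter(ys, m0=0.0, C0=1e6, V=1.0, W=0.5):
    """Kalman filter for the local-level DLM:

        y_t     = theta_t + v_t,        v_t ~ N(0, V)   (observation)
        theta_t = theta_{t-1} + w_t,    w_t ~ N(0, W)   (evolution)

    Starting from theta_0 ~ N(m0, C0), returns the filtered means
    m_t = E[theta_t | y_1, ..., y_t].
    """
    m, C = m0, C0
    means = []
    for y in ys:
        R = C + W              # prior variance of theta_t given y_1..y_{t-1}
        Q = R + V              # one-step forecast variance of y_t
        A = R / Q              # adaptive coefficient (Kalman gain)
        m = m + A * (y - m)    # posterior mean: shift toward the observation
        C = R - A * R          # posterior variance
        means.append(m)
    return means

# Filtering a constant series: the mean locks onto the level.
print(local_level_filter([1.0] * 5)[-1])  # ≈ 1.0
```

As t grows, A converges to a constant in (0, 1), and the update m_t = m_{t-1} + A(y_t - m_{t-1}) is exactly exponential smoothing; the non-Gaussian departures studied in the thesis are precisely the cases where this tidy steady behavior breaks down.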